
feat: plot vllm internal metrics to the wandb log#1567

Merged
terrykong merged 8 commits into main from youngeunk/vllm-wandb-plot
Dec 3, 2025
Conversation

@youngeunkwon0405
Contributor

@youngeunkwon0405 youngeunkwon0405 commented Nov 25, 2025

What does this PR do?

[Image: screenshot of the new vLLM metric plots in the wandb dashboard]

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
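A hypothetical usage sketch, enabling the metrics logger via config. The key names `enable_vllm_metrics_logger` and `vllm_metrics_logger_interval` appear in the review discussion; the exact YAML path under `policy.generation` is an assumption.

```yaml
policy:
  generation:
    vllm_cfg:
      # Opt in to collecting vLLM internal metrics (kv_cache_usage_perc,
      # num_preemptions, generation_tokens) on each worker.
      enable_vllm_metrics_logger: true
      # Sampling interval for the background metrics logger (unit assumed).
      vllm_metrics_logger_interval: 1.0
```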

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • New Features
    • New vLLM metrics tracked and logged: KV cache usage percentage, preemption count, and generation tokens are now collected and visible in monitoring dashboards when metrics logging is configured
    • Per-worker timeline visualization displays granular per-worker metric data as individual series on shared plots with synchronized time-axis representation


@youngeunkwon0405 youngeunkwon0405 self-assigned this Nov 25, 2025
@youngeunkwon0405 youngeunkwon0405 requested review from a team as code owners November 25, 2025 17:51
@youngeunkwon0405 youngeunkwon0405 marked this pull request as draft November 25, 2025 17:51
@coderabbitai
Contributor

coderabbitai bot commented Nov 25, 2025

📝 Walkthrough

The changes extend vLLM metrics collection to include three new metrics (kv_cache_usage_perc, num_preemptions, generation_tokens), add a per-worker timeline visualization utility to the logger, create a helper function to log these metrics to wandb, and integrate the logging into the GRPO training loop when configured.

Changes

  • vLLM metrics collection (nemo_rl/models/generation/vllm/vllm_generation.py, nemo_rl/models/generation/vllm/vllm_worker_async.py): Added support for three new vLLM metrics: kv_cache_usage_perc (float), num_preemptions (int), and generation_tokens (int). Extended metrics initialization, accumulation, and return types to include these new fields.
  • Logging utilities (nemo_rl/algorithms/utils.py): Added new public function log_vllm_metrics_to_wandb that iterates over vLLM metrics and logs them per-worker via the Logger instance.
  • Logger enhancement (nemo_rl/utils/logger.py): Added new public method log_plot_per_worker_timeline_metrics that constructs per-worker time-series plots and delegates logging to all configured backends.
  • GRPO training integration (nemo_rl/algorithms/grpo.py): Imported log_vllm_metrics_to_wandb and added conditional calls to log vLLM metrics to wandb when configured in both synchronous and asynchronous training paths.
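Per the file summaries above, the metrics travel as a nested per-worker mapping. A minimal sketch of that shape and a hypothetical flattening step (the helper name `flatten_for_logging` and the sample values are illustrative, not the PR's actual code):

```python
from typing import Any

# Each metric name maps to {worker_rank: [timeline of samples]}.
# Metric names mirror the three new metrics added by this PR.
vllm_logger_metrics: dict[str, dict[int, list[Any]]] = {
    "kv_cache_usage_perc": {0: [0.41, 0.55], 1: [0.38, 0.49]},
    "num_preemptions": {0: [0, 1], 1: [0, 0]},
    "generation_tokens": {0: [128, 256], 1: [120, 240]},
}

def flatten_for_logging(
    metrics: dict[str, dict[int, list[Any]]],
) -> dict[str, list[Any]]:
    """Collapse per-worker series into 'metric/worker_<rank>' keys."""
    flat: dict[str, list[Any]] = {}
    for name, per_worker in metrics.items():
        for rank, series in per_worker.items():
            flat[f"{name}/worker_{rank}"] = series
    return flat

flat = flatten_for_logging(vllm_logger_metrics)
# Produces six series: one per (metric, worker) pair.
```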

Sequence Diagram

sequenceDiagram
    participant GRPO as GRPO Training
    participant VllmWorker as VllmAsyncGenerationWorker
    participant MetricsUtil as metrics_utils
    participant Logger as Logger
    participant Wandb as W&B Backend

    GRPO->>VllmWorker: Collect vLLM metrics
    VllmWorker->>VllmWorker: Track kv_cache_usage_perc,<br/>num_preemptions,<br/>generation_tokens
    VllmWorker->>GRPO: get_vllm_logger_metrics()
    
    GRPO->>MetricsUtil: log_vllm_metrics_to_wandb()
    MetricsUtil->>Logger: log_plot_per_worker_timeline_metrics()
    
    Logger->>Logger: Construct per-worker<br/>time-series plots
    Logger->>Wandb: log_plot()
    Wandb->>Wandb: Visualize metrics

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • The changes follow consistent, repetitive patterns—the same three metrics are added across multiple files in similar ways
  • Most edits are additive (new functions, new attributes, new logging calls) with no complex logic changes
  • The new utility functions are straightforward with predictable control flow
  • Review focus: verify consistency of metric naming and types across all collection and logging points, confirm proper integration in both sync and async training paths

Possibly related PRs

  • PR #1534: Continues and extends the same per-worker vLLM metrics collection and visualization work, including similar changes to vLLM worker metrics and logging utilities integration.

Suggested labels

enhancement

Suggested reviewers

  • terrykong

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 71.43%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Results For Major Changes ⚠️ Warning: PR introduces major new features (new public methods, functions, and attributes for vLLM metrics plotting) but lacks test results or testing documentation. Add test results and testing information documenting unit tests, integration tests, and verification that no regressions were introduced.

✅ Passed checks (2 passed)

  • Title Check ✅ Passed: The title accurately describes the main change: adding functionality to plot vLLM internal metrics to wandb, which aligns with all the file modifications across the codebase.
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
nemo_rl/models/generation/vllm/vllm_generation.py (1)

841-847: vLLM metrics dict looks consistent; consider tightening return type annotation

The new metrics (kv_cache_usage_perc, num_preemptions, generation_tokens) are wired consistently with the existing per‑DP stats. Since the structure is now always dict[str, dict[int, list[Any]]], you could update get_vllm_logger_metrics’s return type from dict[str, Any] to this more precise type to align with log_vllm_metrics_to_wandb and catch mismatches earlier in type-checking.

Also applies to: 860-868
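The suggested tightening could look like this hedged sketch (shown here as a standalone function with a made-up return value; in the PR it is a method on the generation class):

```python
from typing import Any

# Alias for the per-worker metric shape described in the comment above.
VllmLoggerMetrics = dict[str, dict[int, list[Any]]]

def get_vllm_logger_metrics() -> VllmLoggerMetrics:  # was: -> dict[str, Any]
    # With the precise alias, a type checker can flag shape mismatches
    # between this producer and log_vllm_metrics_to_wandb at check time.
    return {"kv_cache_usage_perc": {0: [0.5, 0.6]}}
```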

nemo_rl/algorithms/grpo.py (1)

40-45: Config‑gated vLLM plotting is wired correctly; consider hardening against empty/None metrics

The new log_vllm_metrics_to_wandb import and both call sites are correctly gated on:

  • policy.generation.vllm_cfg.enable_vllm_metrics_logger
  • logger.wandb_enabled

and reuse the same vllm_metrics_logger_interval that drives collection. This keeps logging overhead opt‑in and consistent.

To make this more future‑proof (especially in the async path where vllm_logger_metrics starts as None and is only set under NEED_REFIT), it would be safer if log_vllm_metrics_to_wandb itself returned early when passed a falsy/empty metrics object. That way, these call sites remain simple and won’t break if NEED_REFIT or initialization order changes later.

Also applies to: 1468-1478, 2378-2388

nemo_rl/algorithms/utils.py (1)

31-32: Guard vLLM metrics helper against empty/None input and clarify docstring scope

The helper is correctly wired to the logger’s per‑worker timeline plotting and matches the structure produced by get_vllm_logger_metrics. Two small improvements would make it more robust and clearer:

  1. Early return on empty/falsy metrics

If vllm_logger_metrics is {} (or accidentally None), the current loop will still run and try membership checks. A cheap guard avoids that and protects future call‑site refactors:

 def log_vllm_metrics_to_wandb(
@@
-    """Log vLLM metrics to wandb.
+    """Log vLLM metrics as per-worker timelines via the shared Logger.
@@
-    """
-    vllm_metrics_to_plot = [
+    """
+    if not vllm_logger_metrics:
+        return
+
+    vllm_metrics_to_plot = [
  1. Docstring wording

The function uses the generic Logger interface and logs to all configured backends (not just wandb). The updated first line above makes that intent clearer while still matching how it’s used (gated on wandb_enabled in grpo.py).

Also applies to: 750-779

nemo_rl/models/generation/vllm/vllm_worker_async.py (1)

170-187: Metric collection for new vLLM fields is thread‑safe and consistent; update comment to match behavior

The extended metric collection looks correct:

  • All five series (inflight_batch_sizes, num_pending_samples, kv_cache_usage_perc, num_preemptions, generation_tokens) are appended under _vllm_metrics_lock and read/cleared under the same lock, avoiding races.
  • get_vllm_logger_metrics/clear_vllm_logger_metrics now expose and reset the new fields in a way that matches how the driver aggregates and logs them.

One small nit: the comment

# Lazy import inside thread target to avoid import overhead if disabled

no longer reflects the code, since the import now lives at the top of _start_vllm_metrics_logger rather than inside the thread target. Consider rewording to something like “Lazy import inside the metrics logger setup to avoid module‑level overhead when disabled” to keep future readers aligned with the actual behavior.

Also applies to: 191-196, 197-221, 243-250, 257-262
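The append-under-lock / read-and-clear-under-lock pattern described here can be sketched as follows (class and method names are illustrative, not the worker's actual attributes):

```python
import threading
from typing import Any

class MetricsBuffer:
    """Thread-safe buffer: a logger thread appends samples, the driver drains."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._series: dict[str, list[Any]] = {
            "kv_cache_usage_perc": [],
            "num_preemptions": [],
            "generation_tokens": [],
        }

    def append(self, name: str, value: Any) -> None:
        # Writer side (metrics logger loop): append under the lock.
        with self._lock:
            self._series[name].append(value)

    def get_and_clear(self) -> dict[str, list[Any]]:
        # Reader side (driver): snapshot and reset under the same lock,
        # so no sample is lost or double-counted between calls.
        with self._lock:
            snapshot = {k: v[:] for k, v in self._series.items()}
            for v in self._series.values():
                v.clear()
            return snapshot

buf = MetricsBuffer()
buf.append("num_preemptions", 1)
snap = buf.get_and_clear()
```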

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6639a40 and 19c1d2d.

📒 Files selected for processing (5)
  • nemo_rl/algorithms/grpo.py (3 hunks)
  • nemo_rl/algorithms/utils.py (2 hunks)
  • nemo_rl/models/generation/vllm/vllm_generation.py (2 hunks)
  • nemo_rl/models/generation/vllm/vllm_worker_async.py (4 hunks)
  • nemo_rl/utils/logger.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
  • nemo_rl/models/generation/vllm/vllm_generation.py
  • nemo_rl/algorithms/grpo.py
  • nemo_rl/utils/logger.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
  • nemo_rl/models/generation/vllm/vllm_generation.py
  • nemo_rl/algorithms/grpo.py
  • nemo_rl/utils/logger.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
  • nemo_rl/models/generation/vllm/vllm_generation.py
  • nemo_rl/algorithms/grpo.py
  • nemo_rl/utils/logger.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
  • nemo_rl/models/generation/vllm/vllm_generation.py
  • nemo_rl/algorithms/grpo.py
  • nemo_rl/utils/logger.py
🧬 Code graph analysis (2)
nemo_rl/algorithms/utils.py (1)
nemo_rl/utils/logger.py (2)
  • Logger (804-1102)
  • log_plot_per_worker_timeline_metrics (938-997)
nemo_rl/algorithms/grpo.py (1)
nemo_rl/algorithms/utils.py (1)
  • log_vllm_metrics_to_wandb (750-779)
🪛 Ruff (0.14.5)
nemo_rl/utils/logger.py

962-964: Avoid specifying long messages outside the exception class

(TRY003)


986-986: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post automodel integration comment / Comment on PR

@youngeunkwon0405
Contributor Author

Hi @terrykong, this is the PR that enables the wandb plots I shared today.
Could you please review it?

Collaborator

@terrykong terrykong left a comment


nice addition! left some comments

@youngeunkwon0405 youngeunkwon0405 added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Nov 26, 2025
@youngeunkwon0405 youngeunkwon0405 removed the CI:L1 Run doctests, unit tests, and functional tests label Nov 26, 2025
@youngeunkwon0405 youngeunkwon0405 requested a review from a team as a code owner November 26, 2025 19:50
@youngeunkwon0405 youngeunkwon0405 added the CI:L1 Run doctests, unit tests, and functional tests label Nov 26, 2025
using wandb native api

not to use native api

fix legend

remove legend

Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
@youngeunkwon0405 youngeunkwon0405 force-pushed the youngeunk/vllm-wandb-plot branch from 91dac96 to 7e35f63 Compare December 2, 2025 17:46
@youngeunkwon0405 youngeunkwon0405 added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 2, 2025
@terrykong terrykong enabled auto-merge (squash) December 2, 2025 20:58
@terrykong terrykong merged commit edd5e7a into main Dec 3, 2025
40 of 41 checks passed
@terrykong terrykong deleted the youngeunk/vllm-wandb-plot branch December 3, 2025 05:01
DeL-TaiseiOzaki pushed a commit to DeL-TaiseiOzaki/RL that referenced this pull request Jan 8, 2026
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 9, 2026

Labels

CI:L1 Run doctests, unit tests, and functional tests
ease of use
